    Robust Obstacle Detection based on Dense Disparity Maps

    Obstacle detection is an important component of many autonomous vehicle navigation systems. Several methods for obstacle detection have been proposed using active sensors such as radar, sonar, and laser range finders. Vision-based techniques have the advantage of low cost and provide a large amount of information about the environment around an intelligent vehicle. This paper deals with the development of an accurate and efficient vision-based obstacle detection method that relies on a wavelet analysis of dense disparity maps. The developed system will be integrated on the Cybercar platform, a road vehicle with fully automated driving capabilities.
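    As a rough illustration of the disparity-map front end such a system builds on (not the paper's wavelet analysis), the following Python sketch computes a dense disparity map from a rectified stereo pair with OpenCV's semi-global matcher and flags near obstacles by simple thresholding; the file names and the threshold are illustrative assumptions.

    # Minimal sketch: dense disparity from a rectified stereo pair, then a naive
    # near-obstacle mask. This is NOT the paper's wavelet-based analysis, only
    # the dense-disparity front end it builds on.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching produces a dense disparity map.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Large disparity = close object; the threshold is an arbitrary placeholder.
    obstacle_mask = disp > 24.0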

    Templates for Convex Cone Problems with Applications to Sparse Signal Recovery

    This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, $\|Wx\|_1$ where $W$ is arbitrary, or a combination thereof. In addition, the paper introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms. Comment: The TFOCS software is available at http://tfocs.stanford.edu. This version has updated references.
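    To make the four-step recipe concrete, here is a minimal Python sketch, under stated assumptions, of one instance: smoothed basis pursuit, min ||x||_1 + (mu/2)||x - x0||^2 subject to Ax = b. Smoothing makes the dual differentiable, the inner minimizer is closed-form soft-thresholding, and the dual is maximized with a Nesterov/FISTA-style accelerated gradient ascent. This is a toy illustration of the idea, not the TFOCS code; the smoothing parameter and iteration count are placeholders.

    import numpy as np

    def shrink(z, t):
        """Soft-thresholding, the prox of t*||.||_1."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def smoothed_bp(A, b, mu=1e-2, iters=500):
        m, n = A.shape
        x0 = np.zeros(n)
        lam = np.zeros(m)                     # dual variable for Ax = b
        L = np.linalg.norm(A, 2) ** 2 / mu    # Lipschitz constant of the dual gradient
        y, t = lam.copy(), 1.0
        for _ in range(iters):
            x = shrink(x0 + A.T @ y / mu, 1.0 / mu)  # inner minimizer, closed form
            g = b - A @ x                            # gradient of the smooth dual
            lam_next = y + g / L                     # ascent step on the concave dual
            t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = lam_next + (t - 1) / t_next * (lam_next - lam)  # Nesterov momentum
            lam, t = lam_next, t_next
        return shrink(x0 + A.T @ lam / mu, 1.0 / mu)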

    Convergence and Perturbation Resilience of Dynamic String-Averaging Projection Methods

    We consider the convex feasibility problem (CFP) in Hilbert space and concentrate on the study of string-averaging projection (SAP) methods for the CFP, analyzing their convergence and their perturbation resilience. In the past, SAP methods were formulated with a single predetermined set of strings and a single predetermined set of weights. Here we extend the scope of the family of SAP methods to allow iteration-index-dependent variable strings and weights and term such methods dynamic string-averaging projection (DSAP) methods. The bounded perturbation resilience of DSAP methods is relevant and important for their possible use in the framework of the recently developed superiorization heuristic methodology for constrained minimization problems. Comment: Computational Optimization and Applications, accepted for publication.
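    A minimal Python sketch of the DSAP iteration, assuming the sets C_1..C_m are given through their projection operators; strings_of and weights_of are hypothetical user-supplied callbacks realizing the iteration-index-dependent strings and weights that distinguish DSAP from static SAP.

    import numpy as np

    def dsap(x0, projectors, strings_of, weights_of, iters=100):
        """projectors: list of functions P_i(x); strings_of(k): list of index
        tuples; weights_of(k): positive weights summing to 1, one per string."""
        x = np.asarray(x0, dtype=float)
        for k in range(iters):
            ends = []
            for string in strings_of(k):
                y = x.copy()
                for i in string:          # apply projections sequentially along a string
                    y = projectors[i](y)
                ends.append(y)
            w = weights_of(k)
            x = sum(wi * yi for wi, yi in zip(w, ends))  # convex combination of endpoints
        return x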

    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to $\mu$-strongly convex objective functions with $L$-Lipschitz continuous gradient. In Nesterov's framework both $\mu$ and $L$ are assumed known -- an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms to estimate locally sufficient $\mu$ and $L$ during the iterations. The mechanisms also allow for the application to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare the performance of these methods. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results. Comment: 23 pages, 4 figures.
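    For reference, a small Python sketch of the underlying fixed-parameter scheme, i.e. Nesterov's optimal method when $\mu$ and $L$ are known; the paper's on-the-fly estimation mechanisms for $\mu$ and $L$ are deliberately omitted.

    import numpy as np

    def nesterov_strongly_convex(grad, x0, mu, L, iters=200):
        q = np.sqrt(mu / L)
        beta = (1 - q) / (1 + q)      # constant momentum (sqrt(L)-sqrt(mu))/(sqrt(L)+sqrt(mu))
        x = np.asarray(x0, dtype=float)
        y = x.copy()
        for _ in range(iters):
            x_next = y - grad(y) / L  # gradient step with the optimal fixed step 1/L
            y = x_next + beta * (x_next - x)
            x = x_next
        return x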

    Fractal Image Coding as Projections Onto Convex Sets

    We show how fractal image coding can be viewed and generalized in terms of the method of projections onto convex sets (POCS). In this approach, the fractal code defines a set of spatial-domain similarity constraints. We also show how such a reformulation in terms of POCS allows additional constraints to be imposed during fractal image decoding. Two applications are presented: image reconstruction with an incomplete fractal code and image denoising.
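    A generic POCS loop is short enough to sketch; assuming each constraint set is available through its projector, one sweep applies the projections cyclically. In the decoding setting one projector would enforce the fractal-code similarity constraints and further projectors the added side constraints; the pixel-range projector below is an illustrative example, not the paper's operator.

    import numpy as np

    def pocs(x0, projectors, iters=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            for P in projectors:   # one sweep = one projection onto each set in turn
                x = P(x)
        return x

    # Illustrative convex constraint an image decoder might impose:
    proj_range = lambda x: np.clip(x, 0.0, 255.0)   # valid pixel values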

    Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints

    Regularization of ill-posed linear inverse problems via $\ell_1$ penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an $\ell_1$ penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation of $\ell_1$-constraints, using a gradient method with projection onto $\ell_1$-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, with and without acceleration. Comment: 24 pages, 5 figures. v2: added a reference and some amendments; 27 pages.
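    A hedged sketch of the unaccelerated variant: Euclidean projection onto the $\ell_1$-ball (computed by the standard sort-and-shrink procedure, which is itself a soft-thresholding with a data-dependent threshold) inside a fixed-step projected gradient loop for min ||Ax - b||^2 s.t. ||x||_1 <= R. Step size and iteration count are illustrative choices.

    import numpy as np

    def project_l1(v, R=1.0):
        """Euclidean projection onto the l1-ball of radius R (sort-and-shrink)."""
        if np.abs(v).sum() <= R:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        k = np.nonzero(u * np.arange(1, len(v) + 1) > css - R)[0][-1]
        theta = (css[k] - R) / (k + 1.0)        # data-dependent threshold
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def pg_l1(A, b, R, iters=300):
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # fixed step 1/||A||^2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = project_l1(x - tau * A.T @ (A @ x - b), R)
        return x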

    Prox-regularity of rank constraint sets and implications for algorithms

    We present an analysis of sets of matrices with rank less than or equal to a specified number $s$. We provide a simple formula for the normal cone to such sets, and use this to show that these sets are prox-regular at all points with rank exactly equal to $s$. The normal cone formula appears to be new. This allows for easy application of prior results guaranteeing local linear convergence of the fundamental alternating projection algorithm between two sets, one of which is a rank constraint set. We apply this to show local linear convergence of another fundamental algorithm, approximate steepest descent. Our results apply not only to linear systems with rank constraints, as have been treated extensively in the literature, but also to nonconvex systems with rank constraints. Comment: 12 pages, 24 references. Revised manuscript to appear in the Journal of Mathematical Imaging and Vision.
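    As an illustration of the alternating projection algorithm the results apply to, the following sketch alternates the metric projection onto {X : rank(X) <= s} (truncated SVD) with the projection onto an affine set, taken here, as an assumed example, to be consistency with observed entries as in matrix completion.

    import numpy as np

    def proj_rank(X, s):
        """Metric projection onto the rank-<=s set via truncated SVD."""
        U, sig, Vt = np.linalg.svd(X, full_matrices=False)
        sig[s:] = 0.0                     # keep the s largest singular values
        return (U * sig) @ Vt

    def alt_proj_completion(M_obs, mask, s, iters=200):
        X = np.where(mask, M_obs, 0.0)
        for _ in range(iters):
            X = proj_rank(X, s)           # project onto the rank constraint set
            X = np.where(mask, M_obs, X)  # project onto the affine data set
        return X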

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; and (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
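    As a concrete companion to point (iii), here is a minimal forward-backward (proximal gradient) sketch for one of the reviewed priors, group sparsity: min_x 0.5||Ax - b||^2 + lam * sum_g ||x_g||_2, whose regularizer prox is block soft-thresholding. The group structure, step size, and iteration count are illustrative assumptions.

    import numpy as np

    def prox_group_l2(x, groups, t):
        """Block soft-thresholding: the prox of t * sum_g ||x_g||_2."""
        y = x.copy()
        for g in groups:                  # each g is an index array for one group
            n = np.linalg.norm(x[g])
            y[g] = 0.0 if n <= t else (1 - t / n) * x[g]
        return y

    def forward_backward(A, b, groups, lam, iters=300):
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step 1/||A||^2 (forward step)
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = prox_group_l2(x - tau * A.T @ (A @ x - b), groups, tau * lam)  # backward step
        return x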